AAAI.2019 - Heuristic Search and Optimization

Total: 18

#1 A Two-Individual Based Evolutionary Algorithm for the Flexible Job Shop Scheduling Problem

Authors: Junwen Ding ; Zhipeng Lü ; Chu-Min Li ; Liji Shen ; Liping Xu ; Fred Glover

Population-based evolutionary algorithms usually manage a large number of individuals to maintain the diversity of the search, which is complex and time-consuming. In this paper, we propose an evolutionary algorithm using only two individuals, called master-apprentice evolutionary algorithm (MAE), for solving the flexible job shop scheduling problem (FJSP). To ensure the diversity and the quality of the evolution, MAE integrates a tabu search procedure, a recombination operator based on path relinking using a novel distance definition, and an effective individual updating strategy, taking into account the multiple complex constraints of FJSP. Experiments on 313 widely-used public instances show that MAE improves the previous best known results for 47 instances and matches the best known results on all except 3 of the remaining instances while consuming the same computational time as current state-of-the-art metaheuristics. MAE additionally establishes solution quality records for 10 hard instances whose previous best values were established by a well-known industrial solver and a state-of-the-art exact method.
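
To make the two-individual scheme concrete, here is a minimal sketch of the master/apprentice loop as we read it from the abstract. All four callables and the `.cost` attribute are hypothetical stand-ins for the paper's FJSP-specific components, not the authors' code.

```python
def mae(random_solution, tabu_search, recombine, distance, generations):
    """Minimal sketch of a two-individual master/apprentice EA loop.

    `tabu_search`, `recombine` (path relinking), and `distance` are
    problem-specific components assumed to be supplied by the caller;
    solutions are assumed to expose a `.cost` attribute to minimize.
    """
    master = tabu_search(random_solution())      # current best solution
    apprentice = tabu_search(random_solution())  # diversification partner
    for _ in range(generations):
        # recombine the two individuals, then intensify around the offspring
        child = tabu_search(recombine(master, apprentice))
        if child.cost < master.cost:
            master, apprentice = child, master   # promote the improved child
        elif distance(child, master) > distance(apprentice, master):
            apprentice = child                   # keep the more distant solution
    return master
```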

#2 Greedy Maximization of Functions with Bounded Curvature under Partition Matroid Constraints

Authors: Tobias Friedrich ; Andreas Göbel ; Frank Neumann ; Francesco Quinzan ; Ralf Rothenberger

We investigate the performance of a deterministic GREEDY algorithm for the problem of maximizing functions under a partition matroid constraint. We consider non-monotone submodular functions and monotone subadditive functions. Even though constrained maximization problems of monotone submodular functions have been extensively studied, little is known about greedy maximization of non-monotone submodular functions or monotone subadditive functions. We give approximation guarantees for GREEDY on these problems, in terms of the curvature. We find that this simple heuristic yields a strong approximation guarantee on a broad class of functions. We discuss the applicability of our results to three real-world problems: maximizing the determinant function of a positive semidefinite matrix (and related problems such as the maximum entropy sampling problem), the constrained maximum cut problem on directed graphs, and combinatorial auction games. We conclude that GREEDY is well-suited to approach these problems. Overall, we present evidence to support the idea that, when dealing with constrained maximization problems with bounded curvature, one need not search for (approximate) monotonicity to get good approximate solutions.
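
For reference, a sketch of deterministic GREEDY under a partition matroid constraint (at most `capacities[i]` elements from block `i`), assuming only a set function `f`. The curvature-based guarantees are the paper's contribution and are not computed here.

```python
def greedy_partition_matroid(f, blocks, capacities):
    """Deterministic GREEDY for max f(S) s.t. |S ∩ blocks[i]| <= capacities[i].

    `f` maps a frozenset to a float.  Each round adds the feasible element
    with the largest positive marginal gain; stops when none exists.
    """
    chosen = set()
    counts = [0] * len(blocks)
    block_of = {x: i for i, block in enumerate(blocks) for x in block}
    while True:
        base = f(frozenset(chosen))
        best, best_gain = None, 0.0
        for x, i in block_of.items():
            if x in chosen or counts[i] >= capacities[i]:
                continue                     # infeasible under the matroid
            gain = f(frozenset(chosen | {x})) - base
            if gain > best_gain:
                best, best_gain = x, gain
        if best is None:                     # no feasible improving element
            return chosen
        chosen.add(best)
        counts[block_of[best]] += 1
```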

#3 Heuristic Search Algorithm for Dimensionality Reduction Optimally Combining Feature Selection and Feature Extraction

Authors: Baokun He ; Swair Shah ; Crystal Maung ; Gordon Arnold ; Guihong Wan ; Haim Schweitzer

The following are two classical approaches to dimensionality reduction: 1. Approximating the data with a small number of features that exist in the data (feature selection). 2. Approximating the data with a small number of arbitrary features (feature extraction). We study a generalization that approximates the data with both selected and extracted features. We show that an optimal solution to this hybrid problem involves a combinatorial search, and cannot be trivially obtained even if one can optimally solve the separate problems of selection and extraction. Our approach, which yields both optimal and approximate solutions, uses a “best-first” heuristic search. The algorithm comes with both an a priori and an a posteriori optimality guarantee similar to those that can be obtained for the classical weighted A* algorithm. Experimental results show the effectiveness of the proposed approach.
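
The abstract invokes the weighted-A* guarantees without stating them; for reference, the standard forms are as follows (our paraphrase — the paper's bounds for the hybrid problem are analogous, not identical):

```latex
% Standard weighted-A* guarantees, stated for reference:
% with admissible h and priority f(n) = g(n) + w\,h(n), \; w \ge 1,
C \;\le\; w \cdot C^{*} \qquad \text{(a priori)} .
% Since an optimal solution must pass through some open node,
C^{*} \;\ge\; \min_{n \in \mathrm{OPEN}} \bigl( g(n) + h(n) \bigr) ,
% so the ratio of the incumbent cost C to this minimum is an a posteriori
% bound on the realized suboptimality, often much tighter than w.
```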

#4 On the Optimal Efficiency of Cost-Algebraic A*

Authors: Robert C. Holte ; Sandra Zilles

Edelkamp et al. (2005) proved that A*, given an admissible heuristic, is guaranteed to return an optimal solution in any cost algebra, not just in the traditional shortest path setting. In this paper, we investigate cost-algebraic A*’s optimal efficiency: in the cost-algebraic setting, under what conditions is A* guaranteed to expand the fewest possible states? In the traditional setting, this question was examined in detail by Dechter & Pearl (1985). They identified five different situations in which A* was optimally efficient. We show that three of them continue to hold in the cost-algebraic setting, but that one does not. We also show that one of them is false: it does not hold even in the traditional setting. We introduce an alternative that does hold in the cost-algebraic setting. Finally, we show that a well-known result due to Nilsson does not hold in the general cost-algebraic setting but does hold in a slightly less general setting.
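
For readers unfamiliar with the setting, the cost-algebra structure of Edelkamp et al. (2005) is, paraphrased (the original also equips the algebra with a selection operation omitted here):

```latex
% A cost algebra, paraphrased: a structure (A, \times, \preceq, \mathbf{1}, \mathbf{0})
% where \langle A, \times, \mathbf{1} \rangle is a monoid, \preceq is a total
% order with least element \mathbf{1} and greatest element \mathbf{0}, and
% \times is isotone:
a \preceq b \;\Rightarrow\; a \times c \preceq b \times c
\ \text{ and } \
c \times a \preceq c \times b .
% The classical shortest-path setting is the instance
% A = \mathbb{R}_{\ge 0} \cup \{\infty\}, \; \times = +, \; \preceq \,=\, \le,
% \; \mathbf{1} = 0, \; \mathbf{0} = \infty.
```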

#5 Running Time Analysis of MOEA/D with Crossover on Discrete Optimization Problem

Authors: Zhengxin Huang ; Yuren Zhou ; Zefeng Chen ; Xiaoyu He

Decomposition-based multiobjective evolutionary algorithms (MOEAs) are a class of popular methods for solving multiobjective optimization problems (MOPs); they have been widely studied in numerical experiments and successfully applied in practice. However, little is known about these algorithms from a theoretical perspective. In this paper, we present a running time analysis of a simple MOEA with crossover based on the MOEA/D framework (MOEA/D-C) on four discrete optimization problems. Our rigorous theoretical analysis shows that MOEA/D-C can obtain a set of Pareto optimal solutions covering the Pareto front of these problems in an expected running time noticeably lower than that of the variant without crossover. Moreover, MOEA/D-C only needs to decompose an MOP into a few scalar optimization subproblems defined by several simple weight vectors. This result suggests that the use of crossover in decomposition-based MOEAs can simplify the setting of weight vectors for different problems and make the algorithms more efficient. This study theoretically explains why some decomposition-based MOEAs work well in computational experiments and provides insights into the design of MOEAs for MOPs in future research.
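
A toy sketch of the MOEA/D framework with uniform crossover on a bitstring problem, to show how weight vectors define scalar subproblems. This is our illustration of the framework only, not the analyzed MOEA/D-C; for a bi-objective toy instance one could use `evaluate = lambda x: (sum(x), len(x) - sum(x))` with `weights = [(1, 0), (0.5, 0.5), (0, 1)]`.

```python
import random

def moead_with_crossover(evaluate, n_bits, weights, iters, pc=0.5):
    """MOEA/D-style sketch: one solution per weight vector, weighted-sum
    decomposition, uniform crossover with the neighboring subproblem's
    solution, and standard 1/n bit mutation.  `evaluate(x)` returns a
    tuple of objective values to maximize."""
    def scalar(fx, w):                          # weighted-sum scalarization
        return sum(wi * fi for wi, fi in zip(w, fx))

    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in weights]
    for _ in range(iters):
        for i, w in enumerate(weights):
            mate = pop[(i + 1) % len(weights)]          # neighboring subproblem
            child = [a if random.random() < pc else b   # uniform crossover
                     for a, b in zip(pop[i], mate)]
            child = [1 - b if random.random() < 1 / n_bits else b
                     for b in child]                    # bit mutation
            if scalar(evaluate(child), w) >= scalar(evaluate(pop[i]), w):
                pop[i] = child                          # keep the better solution
    return pop
```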

#6 Bézier Simplex Fitting: Describing Pareto Fronts of Simplicial Problems with Small Samples in Multi-Objective Optimization

Authors: Ken Kobayashi ; Naoki Hamada ; Akiyoshi Sannai ; Akinori Tanaka ; Kenichi Bannai ; Masashi Sugiyama

Multi-objective optimization problems require simultaneously optimizing two or more objective functions. Many studies have reported that the solution set of an M-objective optimization problem often forms an (M − 1)-dimensional topological simplex (a curved line for M = 2, a curved triangle for M = 3, a curved tetrahedron for M = 4, etc.). Since the dimensionality of the solution set increases as the number of objectives grows, an exponentially large sample size is needed to cover the solution set. To reduce the required sample size, this paper proposes a Bézier simplex model and its fitting algorithm. These techniques can exploit the simplex structure of the solution set and decompose a high-dimensional surface fitting task into a sequence of low-dimensional ones. An approximation theorem of Bézier simplices is proven. Numerical experiments with synthetic and real-world optimization problems demonstrate that the proposed method achieves an accurate approximation of high-dimensional solution sets with small samples. In practice, such an approximation will be conducted in the post-optimization process and enable a better trade-off analysis.
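
To show what the model looks like, here is a sketch that evaluates a Bézier simplex from its control points via the multinomial Bernstein basis (the standard definition; the paper's contribution is the fitting algorithm, not shown here). For M = 2 and degree 2, `control` would map the multi-indices (2,0), (1,1), (0,2) to points in objective space.

```python
from math import factorial

import numpy as np

def multi_indices(m, degree):
    """All m-part multi-indices d with |d| = degree."""
    if m == 1:
        return [(degree,)]
    return [(k,) + rest for k in range(degree + 1)
            for rest in multi_indices(m - 1, degree - k)]

def bezier_simplex(control, t, degree):
    """Evaluate a Bézier simplex at barycentric coordinates t (sum(t) == 1).

    `control` maps each multi-index d (|d| = degree, len(d) == len(t))
    to a control point given as an np.ndarray in objective space.
    """
    value = 0.0
    for d in multi_indices(len(t), degree):
        coef = factorial(degree)
        for di in d:
            coef //= factorial(di)               # multinomial coefficient
        bernstein = coef * np.prod([ti ** di for ti, di in zip(t, d)])
        value = value + bernstein * control[d]   # blend the control points
    return value
```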

#7 Fine-Grained Search Space Classification for Hard Enumeration Variants of Subset Problems

Authors: Juho Lauri ; Sourav Dutta

We propose a simple, powerful, and flexible machine learning framework for (i) reducing the search space of computationally difficult enumeration variants of subset problems and (ii) augmenting existing state-of-the-art solvers with informative cues arising from the input distribution. We instantiate our framework for the problem of listing all maximum cliques in a graph, a central problem in network analysis, data mining, and computational biology. We demonstrate the practicality of our approach on real-world networks with millions of vertices and edges: it not only retains all optimal solutions, but also aggressively prunes the input instances, resulting in several-fold speedups of state-of-the-art algorithms. Finally, we explore the limits of scalability and robustness of our proposed framework, suggesting that supervised learning is viable for tackling NP-hard problems in practice.
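
A sketch of the prune-then-solve pattern for maximum clique enumeration. The two vertex features here (degree and core number) are illustrative stand-ins for the paper's feature set, and `classifier.predict_proba` is assumed to follow the scikit-learn convention; note that too low a threshold prunes little, while too high a threshold risks discarding optimal solutions.

```python
import networkx as nx

def prune_then_enumerate(G, classifier, threshold=0.5):
    """Drop vertices a trained classifier deems unlikely to occur in any
    maximum clique, then enumerate maximum cliques on the remainder."""
    core = nx.core_number(G)
    feats = [[G.degree(v), core[v]] for v in G]
    proba = classifier.predict_proba(feats)[:, 1]   # P(v in a maximum clique)
    keep = [v for v, p in zip(G, proba) if p >= threshold]
    H = G.subgraph(keep)
    # nx.find_cliques lists maximal cliques; keep only the largest ones
    best = max((len(c) for c in nx.find_cliques(H)), default=0)
    return [c for c in nx.find_cliques(H) if len(c) == best]
```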

#8 On the Time Complexity of Algorithm Selection Hyper-Heuristics for Multimodal Optimisation

Authors: Andrei Lissovoi ; Pietro S. Oliveto ; John Alasdair Warwicker

Selection hyper-heuristics are automated algorithm selection methodologies that choose between different heuristics during the optimisation process. Recently, selection hyper-heuristics choosing between a collection of elitist randomised local search heuristics with different neighbourhood sizes have been shown to optimise a standard unimodal benchmark function from evolutionary computation in the optimal expected runtime achievable with the available low-level heuristics. In this paper we extend our understanding to the domain of multimodal optimisation by considering a hyper-heuristic from the literature that can switch between elitist and non-elitist heuristics during the run. We first identify the range of parameters that allow the hyper-heuristic to hill-climb efficiently and prove that it can optimise a standard hill-climbing benchmark function in the best expected asymptotic time achievable by unbiased mutation-based randomised search heuristics. Afterwards, we use standard multimodal benchmark functions to highlight function characteristics where the hyper-heuristic is efficient by swiftly escaping local optima and ones where it is not. For a function class called CLIFF_d, where a new gradient of increasing fitness can be identified after escaping local optima, the hyper-heuristic is extremely efficient while a wide range of established elitist and non-elitist algorithms are not, including the well-studied Metropolis algorithm. We complete the picture with an analysis of another standard benchmark function called JUMP_d as an example to highlight problem characteristics where the hyper-heuristic is inefficient. Yet, it still outperforms the well-established non-elitist Metropolis algorithm.
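
A simplified reading of such a hyper-heuristic, sketched in code: at each step it chooses either an elitist (only-improving) or a non-elitist (accept-anything) acceptance operator. The parameter names are ours, and the analyzed algorithm differs in detail.

```python
import random

def move_acceptance_hh(fitness, n_bits, p_nonelitist, iters):
    """Hyper-heuristic sketch switching between elitist and non-elitist
    acceptance: apply a single random bit flip, then with probability
    `p_nonelitist` accept unconditionally, otherwise accept only
    non-worsening moves."""
    x = [random.randint(0, 1) for _ in range(n_bits)]
    fx = fitness(x)
    for _ in range(iters):
        y = x[:]
        i = random.randrange(n_bits)
        y[i] = 1 - y[i]                          # local move: flip one bit
        fy = fitness(y)
        if random.random() < p_nonelitist or fy >= fx:
            x, fx = y, fy                        # non-elitist vs. elitist step
    return x, fx
```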

#9 Evolving Action Abstractions for Real-Time Planning in Extensive-Form Games

Authors: Julian R. H. Mariño ; Rubens O. Moraes ; Claudio Toledo ; Levi H. S. Lelis

A key challenge for planning systems in real-time multiagent domains is to search in large action spaces to decide an agent’s next action. Previous works showed that handcrafted action abstractions allow planning systems to focus their search on a subset of promising actions. In this paper we show that the problem of generating action abstractions can be cast as a problem of selecting a subset of pure strategies from a pool of options. We model the selection of a subset of pure strategies as a two-player game in which the strategy set of the players is the powerset of the pool of options; we call this game the subset selection game. We then present an evolutionary algorithm for solving such a game. Empirical results on small matches of µRTS show that our evolutionary approach is able to converge to a Nash equilibrium for the subset selection game. Also, results on larger matches show that search algorithms using action abstractions derived by our evolutionary approach are able to substantially outperform all state-of-the-art planning systems tested.
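
One way to picture an evolutionary algorithm on such a game is a coevolutionary loop where each player evolves a population of subsets and a subset's fitness is its average payoff against the opponent's population. The sketch below is our abstraction (zero-sum assumed, operators simplified), not the paper's EA.

```python
import random

def coevolve_subset_game(pool, payoff, pop_size, gens, rng=random):
    """Coevolutionary sketch for a two-player subset selection game.

    `payoff(a, b)` returns player 0's utility when player 0 plays subset
    `a` and player 1 plays subset `b`; the game is assumed zero-sum."""
    def rand_subset():
        return frozenset(x for x in pool if rng.random() < 0.5)

    def mutate(s):
        return s ^ {rng.choice(pool)}            # toggle one option

    pops = [[rand_subset() for _ in range(pop_size)] for _ in range(2)]
    for _ in range(gens):
        for p in range(2):
            opp = pops[1 - p]
            def fit(s):
                vals = [payoff(s, o) if p == 0 else -payoff(o, s) for o in opp]
                return sum(vals) / len(vals)     # mean payoff vs. opponent pop
            ranked = sorted(pops[p], key=fit, reverse=True)
            elite = ranked[: pop_size // 2]
            pops[p] = elite + [mutate(rng.choice(elite)) for _ in elite]
    return pops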

#10 Real-Time Planning as Decision-Making under Uncertainty

Authors: Andrew Mitchell ; Wheeler Ruml ; Fabian Spaniol ; Jörg Hoffmann ; Marek Petrik

In real-time planning, an agent must select the next action to take within a fixed time bound. Many popular real-time heuristic search methods approach this by expanding nodes using time-limited A* and selecting the action leading toward the frontier node with the lowest f value. In this paper, we reconsider real-time planning as a problem of decision-making under uncertainty. We propose treating heuristic values as uncertain evidence and we explore several backup methods for aggregating this evidence. We then propose a novel lookahead strategy that expands nodes to minimize risk, the expected regret in case a non-optimal action is chosen. We evaluate these methods in a simple synthetic benchmark and the sliding tile puzzle and find that they outperform previous methods. This work illustrates how uncertainty can arise even when solving deterministic planning problems, due to the inherent ignorance of time-limited search algorithms about those portions of the state space that they have not computed, and how an agent can benefit from explicitly metareasoning about this uncertainty.
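A Monte-Carlo sketch of the minimum-risk idea: model each action's value as a belief distribution (here Gaussian, with values to be maximized) and commit to the action whose expected regret under the sampled joint realization is smallest. This illustrates the concept only, not the paper's exact estimator or backup methods.

```python
import numpy as np

def min_risk_action(means, stds, n_samples=10_000, rng=None):
    """Pick the action minimizing Monte-Carlo expected regret.

    `means`/`stds` parameterize per-action Gaussian beliefs (e.g., induced
    by uncertain heuristic values).  The regret of committing to action a
    is max_b V_b - V_a under each sampled realization."""
    rng = rng or np.random.default_rng()
    samples = rng.normal(means, stds, size=(n_samples, len(means)))
    best = samples.max(axis=1, keepdims=True)
    risk = (best - samples).mean(axis=0)     # expected regret per action
    return int(np.argmin(risk))
```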

#11 Evolving Solutions to Community-Structured Satisfiability Formulas

Authors: Frank Neumann ; Andrew M. Sutton

We study the ability of a simple mutation-only evolutionary algorithm to solve propositional satisfiability formulas with inherent community structure. We show that the community structure translates to good fitness-distance correlation properties, which implies that the objective function provides a strong signal in the search space for evolutionary algorithms to locate a satisfying assignment efficiently. We prove that when the formula clusters into communities of size s ∈ ω(log n) ∩ O(n^(ε/(2ε+2))) for some constant 0 < ε < 1, and there is a nonuniform distribution over communities, a simple evolutionary algorithm called the (1+1) EA finds a satisfying assignment in polynomial time on a 1 − o(1) fraction of formulas with at least constant constraint density. This is a significant improvement over recent results on uniform random formulas, for which the same algorithm has only been proven efficient at densities that are at least logarithmic.
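
The (1+1) EA itself is simple enough to state in a few lines; a minimal sketch on CNF formulas, with the number of satisfied clauses as fitness (a standard setup, assumed rather than taken from the paper):

```python
import random

def one_plus_one_ea(clauses, n_vars, max_iters):
    """(1+1) EA on a CNF formula: flip each bit independently with
    probability 1/n and keep the offspring if it satisfies at least as
    many clauses.  Clauses are lists of nonzero ints (DIMACS-style
    literals: v means variable v is true, -v means false)."""
    def satisfied(assign):
        return sum(any((lit > 0) == assign[abs(lit) - 1] for lit in c)
                   for c in clauses)

    x = [random.random() < 0.5 for _ in range(n_vars)]
    fx = satisfied(x)
    for _ in range(max_iters):
        y = [not b if random.random() < 1 / n_vars else b for b in x]
        fy = satisfied(y)
        if fy >= fx:
            x, fx = y, fy
        if fx == len(clauses):               # satisfying assignment found
            return x
    return None
```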

#12 Pareto Optimization for Subset Selection with Dynamic Cost Constraints

Authors: Vahid Roostapour ; Aneta Neumann ; Frank Neumann ; Tobias Friedrich

In this paper, we consider the subset selection problem for a function f with a constraint bound B which changes over time. We point out that adaptive variants of greedy approaches commonly used in the area of submodular optimization are not able to maintain their approximation quality. Investigating the recently introduced POMC Pareto optimization approach, we show that this algorithm efficiently computes a φ = (α_f/2)(1 − 1/e^(α_f))-approximation, where α_f is the submodularity ratio of f, for each possible constraint bound b ≤ B. Furthermore, we show that POMC is able to adapt its set of solutions quickly when B increases. Our experimental investigations for influence maximization in social networks show the advantage of POMC over generalized greedy algorithms.
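
A sketch of the POMC-style Pareto optimization loop: keep an archive of bitstrings nondominated in the bi-objective space (maximize f, minimize cost), mutate a random archive member, and keep the archive dominance-free. Treating infeasible solutions (cost > B) as having f = −∞ is one common convention and an assumption here; the paper's exact handling may differ.

```python
import random

def pomc(f, cost, n, budget_B, iters):
    """Pareto optimization sketch for subset selection with cost bound B."""
    def objectives(x):
        c = cost(x)
        return ((f(x) if c <= budget_B else float('-inf')), c)

    def dominates(a, b):              # maximize f, minimize cost
        return a[0] >= b[0] and a[1] <= b[1] and a != b

    empty = tuple([0] * n)
    archive = {empty: objectives(empty)}
    for _ in range(iters):
        parent = random.choice(list(archive))
        child = tuple(1 - b if random.random() < 1 / n else b for b in parent)
        oc = objectives(child)
        if any(dominates(o, oc) or o == oc for o in archive.values()):
            continue                  # child is dominated or duplicated
        archive = {x: o for x, o in archive.items() if not dominates(oc, o)}
        archive[child] = oc           # insert and evict dominated members
    return archive
```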

#13 Stepping Stones to Inductive Synthesis of Low-Level Looping Programs

Author: Christopher D. Rosin

Inductive program synthesis, from input/output examples, can provide an opportunity to automatically create programs from scratch without presupposing the algorithmic form of the solution. For induction of general programs with loops (as opposed to loop-free programs, or synthesis for domain-specific languages), the state of the art is at the level of introductory programming assignments. Most problems that require algorithmic subtlety, such as fast sorting, have remained out of reach without the benefit of significant problem-specific background knowledge. A key challenge is to identify cues that are available to guide search towards correct looping programs. We present MAKESPEARE, a simple delayed-acceptance hillclimbing method that synthesizes low-level looping programs from input/output examples. During search, delayed acceptance bypasses small gains to identify significantly-improved stepping stone programs that tend to generalize and enable further progress. The method performs well on a set of established benchmarks, and succeeds on the previously unsolved “Collatz Numbers” program synthesis problem. Additional benchmarks include the problem of rapidly sorting integer arrays, in which we observe the emergence of comb sort (a Shell sort variant that is empirically fast). MAKESPEARE has also synthesized a record-setting program on one of the puzzles from the TIS-100 assembly language programming game.
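
MAKESPEARE's delayed-acceptance rule differs in its details; as a point of reference, here is the classic late-acceptance hill-climbing rule (Burke & Bykov) that it is reminiscent of, which likewise lets the search pass over small dips and plateaus:

```python
def late_acceptance_hill_climbing(fitness, mutate, initial, history_len, iters):
    """Late-acceptance hill climbing: accept a candidate if it beats either
    the current solution or the fitness recorded `history_len` steps ago
    (maximization version)."""
    current, fc = initial, fitness(initial)
    history = [fc] * history_len             # circular buffer of past fitnesses
    for step in range(iters):
        candidate = mutate(current)
        f_cand = fitness(candidate)
        slot = step % history_len
        if f_cand >= history[slot] or f_cand >= fc:
            current, fc = candidate, f_cand  # delayed acceptance of the move
        history[slot] = fc
    return current, fc
```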

#14 Allocating Planning Effort When Actions Expire

Authors: Shahaf S. Shperberg ; Andrew Coles ; Bence Cserna ; Erez Karpas ; Wheeler Ruml ; Solomon E. Shimony

Making plans that depend on external events can be tricky. For example, an agent considering a partial plan that involves taking a bus must recognize that this partial plan is only viable if completed and selected for execution in time for the agent to arrive at the bus stop. This setting raises the thorny problem of allocating the agent’s planning effort across multiple open search nodes, each of which has an expiration time and an expected completion effort in addition to the usual estimated plan cost. This paper formalizes this metareasoning problem, studies its theoretical properties, and presents several algorithms for solving it. Our theoretical results include a surprising connection to job scheduling, as well as to deliberation scheduling in time-dependent planning. Our empirical results indicate that our algorithms are effective in practice. This work advances our understanding of how heuristic search planners might address realistic problem settings.
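The scheduling connection can be illustrated with the classic single-machine feasibility test: nodes with deterministic completion efforts can all be finished before their expiration times iff processing them earliest-deadline-first never misses a deadline. This is a deliberate simplification of the paper's stochastic setting (efforts there are only expected), and the field names are ours.

```python
def all_finishable(nodes):
    """Earliest-deadline-first feasibility check for open search nodes.

    `nodes` is an iterable of (effort, deadline) pairs; returns True iff
    one planner can complete every node before it expires."""
    elapsed = 0.0
    for effort, deadline in sorted(nodes, key=lambda n: n[1]):
        elapsed += effort                    # process in deadline order
        if elapsed > deadline:
            return False                     # this node would expire first
    return True
```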

#15 Enriching Non-Parametric Bidirectional Search Algorithms

Authors: Shahaf S. Shperberg ; Ariel Felner ; Nathan R. Sturtevant ; Solomon E. Shimony ; Avi Hayoun

NBS is a non-parametric bidirectional search algorithm proven to perform at most twice the number of node expansions required to verify the optimality of a solution. We introduce new variants of NBS that are aimed at finding all optimal solutions. We then introduce an algorithmic framework that includes NBS as a special case. Finally, we introduce DVCBS, a new algorithm in this framework that aims to further reduce the number of expansions. Unlike NBS, DVCBS does not have any worst-case bound guarantees, but in practice it outperforms NBS in verifying the optimality of solutions.
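
For context, NBS's guarantee rests on a lower bound over pairs of frontier nodes; paraphrasing the original NBS paper (Chen et al. 2017), up to details such as a minimum edge cost:

```latex
% Pair lower bound driving NBS, paraphrased:
lb(u, v) \;=\; \max\bigl( f_F(u),\; f_B(v),\; g_F(u) + g_B(v) \bigr),
% where F and B index the forward and backward searches.  NBS always
% expands a pair minimizing lb, and a solution of cost C is verified
% optimal once  C \le \min_{(u,v)} lb(u, v).
```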

#16 Bounded Suboptimal Search with Learned Heuristics for Multi-Agent Systems

Authors: Markus Spies ; Marco Todescato ; Hannes Becker ; Patrick Kesper ; Nicolai Waniek ; Meng Guo

A wide range of discrete planning problems can be solved optimally using graph search algorithms. However, optimal search quickly becomes infeasible with increased complexity of a problem. In such a case, heuristics that guide the planning process towards the goal state can increase performance considerably. Unfortunately, heuristics are often unavailable or need manual and time-consuming engineering. Building upon recent results on applying deep learning to learn generalized reactive policies, we propose to learn heuristics by imitation learning. After learning heuristics based on optimal examples, they are used to guide a classical search algorithm to solve unseen tasks. However, directly applying learned heuristics in search algorithms such as A∗ breaks optimality guarantees, since learned heuristics are not necessarily admissible. Therefore, we (i) propose a novel method that utilizes learned heuristics to guide Focal Search, a variant of A∗ with guarantees on bounded suboptimality; (ii) compare the complexity and performance of jointly learning individual policies for multiple robots with an approach that learns one policy for all robots; (iii) thoroughly examine how learned policies generalize to previously unseen environments and demonstrate considerably improved performance in a simulated complex dynamic coverage problem.
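
A sketch of focal search with a learned heuristic: OPEN is ordered by f = g + h for an admissible h, FOCAL is the slice of OPEN within a factor w of the best f, and the (possibly inadmissible) learned heuristic picks inside FOCAL. Any goal taken from FOCAL costs at most w times the optimum. Interfaces are ours, and this single-agent sketch omits the paper's multi-agent machinery.

```python
import heapq
from itertools import count

def focal_search(start, goal_test, expand, h_admissible, h_learned, w):
    """Bounded-suboptimal focal search; `expand(s)` yields
    (successor, step_cost) pairs and states must be hashable."""
    tie = count()                             # heap tie-breaker
    g = {start: 0.0}
    open_list = [(h_admissible(start), next(tie), start)]
    while open_list:
        f_min = open_list[0][0]
        # FOCAL: the w-suboptimality band of OPEN, ranked by the learned h
        focal = [e for e in open_list if e[0] <= w * f_min]
        entry = min(focal, key=lambda e: h_learned(e[2]))
        open_list.remove(entry)
        heapq.heapify(open_list)              # restore heap order after removal
        f_s, _, state = entry
        if f_s > g[state] + h_admissible(state):
            continue                          # stale duplicate left on OPEN
        if goal_test(state):
            return g[state]                   # cost <= w * optimal
        for succ, step_cost in expand(state):
            ng = g[state] + step_cost
            if ng < g.get(succ, float('inf')):
                g[succ] = ng
                heapq.heappush(open_list,
                               (ng + h_admissible(succ), next(tie), succ))
    return None
```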

#17 An Improved Generic Bet-and-Run Strategy with Performance Prediction for Stochastic Local Search

Authors: Thomas Weise ; Zijun Wu ; Markus Wagner

A commonly used strategy for improving optimization algorithms is to restart the algorithm when it is believed to be trapped in an inferior part of the search space. Building on the recent success of BET-AND-RUN approaches for restarted local search solvers, we introduce a more generic version that makes use of performance prediction. Our goal is to obtain the best possible results within a given time budget t using a given black-box optimization algorithm. If no prior knowledge about problem features and algorithm behavior is available, the question arises of how to use the time budget most efficiently. We first start k ≥ 1 independent runs of the algorithm during an initialization budget t1 < t, pause these runs, then apply a decision maker D to choose 1 ≤ m < k runs from them (consuming t2 ≥ 0 time units in doing so), and finally continue these runs for the remaining t3 = t − t1 − t2 time units. In previous BET-AND-RUN strategies, the decision maker D = currentBest would simply select the run with the best results so far, at negligible time cost. We propose using more advanced methods to discriminate between “good” and “bad” sample runs, with the goal of increasing the correlation of the chosen run with the a posteriori best one. In over 157 million experiments, we test different approaches to predict which run may yield the best results if granted the remaining budget. We show (1) that the currentBest method is indeed a very reliable and robust baseline approach, and (2) that our approach can yield better results than the previous methods.
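
The three-phase schedule described above translates almost directly into code. In this sketch the run interface (`make_run()`, `.step(budget)`, `.best`, `.cost`) is our assumption, and the decision maker's own time cost t2 is ignored for simplicity.

```python
def bet_and_run(make_run, k, t1, t3, decide, m=1):
    """Generic BET-AND-RUN: start k independent runs for t1/k time units
    each, let a decision maker pick m of them, then spend the remaining
    t3 budget continuing the picks."""
    runs = [make_run() for _ in range(k)]
    for r in runs:
        r.step(t1 / k)                       # phase 1: initialization budget
    picked = decide(runs, m)                 # phase 2: choose runs to continue
    for r in picked:
        r.step(t3 / m)                       # phase 3: remaining budget
    return min((r.best for r in picked), key=lambda s: s.cost)

def current_best(runs, m):
    """The baseline decision maker: keep the m runs with the best
    results so far (`currentBest` in the paper's terminology)."""
    return sorted(runs, key=lambda r: r.best.cost)[:m]
```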

#18 Fuzzy-Classification Assisted Solution Preselection in Evolutionary Optimization

Authors: Aimin Zhou ; Jinyuan Zhang ; Jianyong Sun ; Guixu Zhang

In evolutionary optimization, preselection is an efficient operator for improving search efficiency: it aims to filter out unpromising candidate solutions before fitness evaluation. Most existing preselection operators rely on fitness values, surrogate models, or classification models. Classification-based preselection regards preselection as a classification procedure, i.e., differentiating promising from unpromising candidate solutions. However, the difference between the promising and unpromising classes becomes fuzzy as the run progresses, since all remaining solutions are likely to be promising ones. To address this challenge, this paper proposes a fuzzy classification based preselection (FCPS) scheme, which uses a membership function to measure the quality of candidate solutions. The proposed FCPS scheme is applied to two state-of-the-art evolutionary algorithms on a test suite. The experimental results show the potential of FCPS for improving algorithm performance.
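
One way to realize such a scheme is a fuzzy-kNN membership function over already-evaluated solutions; the inverse-distance weighting below is our illustrative choice, not the paper's membership function.

```python
import numpy as np

def fuzzy_preselect(candidates, evaluated_x, evaluated_f, keep, k=5):
    """Preselection sketch: score each unevaluated candidate by its fuzzy
    membership in the 'promising' class (better-than-median evaluated
    neighbors count as promising) and keep the `keep` best for real
    fitness evaluation.  Assumes minimization over numeric vectors."""
    X = np.asarray(evaluated_x, dtype=float)
    promising = (np.asarray(evaluated_f) <= np.median(evaluated_f))
    scores = []
    for c in np.asarray(candidates, dtype=float):
        d = np.linalg.norm(X - c, axis=1)
        nearest = np.argsort(d)[:k]                      # k nearest evaluated
        weight = 1.0 / (d[nearest] + 1e-12)              # inverse-distance weights
        scores.append(weight @ promising[nearest].astype(float) / weight.sum())
    order = np.argsort(scores)[::-1]                     # highest membership first
    return [candidates[i] for i in order[:keep]]
```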